Learning in Games: Robustness of Fast Convergence

Thodoris Lykouris

Monday, October 24, 2016
4:00pm, 122 Gates Hall

Abstract:

We show that learning algorithms satisfying a low approximate regret property converge quickly to approximate optimality in a large class of repeated games. The property, which requires only that each learner has small regret relative to a (1 + eps)-multiplicative approximation of the best action in hindsight, is ubiquitous among learning algorithms: it is satisfied even by the vanilla Hedge forecaster. Our results improve upon recent work of Syrgkanis et al. [SALS15] in several ways. We improve the speed of convergence by a factor of n, the number of players, and require only that players observe payoffs under the other players' realized actions rather than expected payoffs. We further show that convergence occurs with high probability and, under certain conditions, that it holds under bandit feedback. Both the scope of settings and the class of algorithms for which our analysis yields fast convergence are considerably broader than in previous work.
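One natural way to formalize the property (our notation; the paper's exact statement may differ): an algorithm that plays distributions p_t over N actions against loss vectors l_t in [0,1]^N has eps-approximate regret at most R if

    sum_{t=1}^T <p_t, l_t>  <=  (1 + eps) * min_{i} sum_{t=1}^T l_t(i)  +  R.

A standard analysis shows that Hedge with learning rate eta on the order of eps satisfies this with R = O((log N)/eps), which is why even the vanilla forecaster qualifies.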

Our framework applies to dynamic population games via a low approximate regret property for shifting experts. Here we strengthen the results of [LST16] in two ways: we allow players to select learning algorithms from a larger class, which includes a minor variant of the basic Hedge algorithm, and we increase the maximum player churn under which approximate optimality is achieved.
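As a concrete illustration, below is a minimal Python sketch of Hedge augmented with a fixed-share mixing step (in the style of Herbster and Warmuth), a standard device for the shifting-experts setting. The function name and the parameter choices (eta, alpha) are illustrative assumptions, not the specific variant analyzed in the talk.

    # Illustrative sketch only: Hedge with a fixed-share mixing step
    # (Herbster-Warmuth), a standard approach to shifting experts.
    # Parameter choices (eta, alpha) are assumptions, not the talk's.
    import numpy as np

    def fixed_share_hedge(losses, eps=0.1, alpha=0.01):
        # losses: array of shape (T, N) with entries in [0, 1]
        T, N = losses.shape
        eta = eps                     # learning rate tied to the approximation level
        w = np.full(N, 1.0 / N)       # weights over the N experts
        expected_loss = 0.0
        for t in range(T):
            p = w / w.sum()           # play the normalized weights
            expected_loss += p @ losses[t]
            w = w * np.exp(-eta * losses[t])           # multiplicative update
            w = (1 - alpha) * w + alpha * w.sum() / N  # fixed-share mixing
        return expected_loss

The mixing step keeps every expert's weight bounded away from zero, so the algorithm can recover quickly when the best expert changes; setting alpha = 0 recovers vanilla Hedge.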

Joint work with Dylan Foster, Zhiyuan Li, Karthik Sridharan, and Eva Tardos.